Supplement to 'Sparse recovery by thresholded non-negative least squares'

Authors

  • Martin Slawski
  • Matthias Hein
Abstract

We here provide additional proofs, definitions, lemmas and derivations omitted in the paper. Note that material contained in the latter is referred to by the captions used there (e.g. Theorem 1), whereas auxiliary statements contained exclusively in this supplement are preceded by a capital Roman letter (e.g. Theorem A.1).

A Sub-Gaussian random variables and concentration inequalities

A random variable $Z$ is called sub-Gaussian if there exists a positive constant $K$ such that $(\mathbb{E}[|Z|^q])^{1/q} \leq K \sqrt{q}$ for all $q \geq 1$. The smallest such $K$ is called the sub-Gaussian norm $\|Z\|_{\psi_2}$ of $Z$. If $\mathbb{E}[Z] = 0$, which shall be assumed for the remainder of this paragraph, then the moment-generating function of $Z$ satisfies $\mathbb{E}[\exp(tZ)] \leq \exp(\sigma^2 t^2 / 2)$ for all $t \in \mathbb{R}$, for a parameter $\sigma > 0$ which is related to $\|Z\|_{\psi_2}$ by a multiplicative constant, cf. [1]. It follows that if $Z_1, \ldots, Z_n$ are i.i.d. copies of $Z$ and $v \in \mathbb{R}^n$, then $\sum_{i=1}^n v_i Z_i$ is sub-Gaussian with parameter $\|v\|_2^2 \sigma^2$. We have the well-known tail bound

$\mathbb{P}(|Z| > z) \leq 2 \exp\left(-\frac{z^2}{2\sigma^2}\right), \quad z \geq 0.$  (A.1)

Combining the previous two facts and using a union bound, with $Z = (Z_1, \ldots, Z_n)$, it follows that for any collection of vectors $v_j \in \mathbb{R}^n$, $j = 1, \ldots, p$,

$\mathbb{P}\left(\max_{1 \leq j \leq p} |v_j^\top Z| > \sigma \max_{1 \leq j \leq p} \|v_j\|_2 \left(\sqrt{2 \log p} + z\right)\right) \leq 2 \exp\left(-\frac{z^2}{2}\right), \quad z \geq 0.$  (A.2)

Indeed, each $v_j^\top Z / \|v_j\|_2$ is sub-Gaussian with parameter $\sigma$, and since $(\sqrt{2 \log p} + z)^2 \geq 2 \log p + z^2$, the union bound over the $p$ events offsets the factor $p$.

A.1 Bernstein-type inequality for squared sub-Gaussian random variables

The following exponential inequality combines Lemma 14, Proposition 16 and Remark 18 in [1].

Lemma A.1. Let $Z_1, \ldots, Z_n$ be i.i.d. centered sub-Gaussian random variables with sub-Gaussian norm $K$. Then for every $a = (a_1, \ldots, a_n) \in \mathbb{R}^n$ and every $z \geq 0$, one has

$\mathbb{P}\left(\left|\sum_{i=1}^n a_i \left(Z_i^2 - \mathbb{E}[Z_i^2]\right)\right| \geq z\right) \leq 2 \exp\left(-c \min\left(\frac{z^2}{K^4 \|a\|_2^2}, \frac{z}{K^2 \|a\|_\infty}\right)\right)$

for an absolute constant $c > 0$.
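As a quick numerical illustration of (A.2), the following sketch runs a small Monte Carlo check with standard Gaussian $Z_i$, for which one may take $\sigma = 1$. The dimensions, number of trials, deviation level z, and random test vectors are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, trials = 50, 200, 2000    # sample size, number of test vectors, repetitions
sigma = 1.0                     # N(0,1) entries are sub-Gaussian with parameter sigma = 1
z = 2.5                         # deviation level in (A.2)

V = rng.standard_normal((p, n))  # arbitrary collection of vectors v_1, ..., v_p
threshold = sigma * np.linalg.norm(V, axis=1).max() * (np.sqrt(2 * np.log(p)) + z)

exceed = 0
for _ in range(trials):
    Z = sigma * rng.standard_normal(n)        # i.i.d. centered sub-Gaussian sample
    exceed += np.abs(V @ Z).max() > threshold  # event bounded by (A.2)

print("empirical exceedance:", exceed / trials)
print("bound 2*exp(-z^2/2): ", 2 * np.exp(-z ** 2 / 2))
```

The empirical frequency should sit well below the stated bound, reflecting the slack introduced by the union bound.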


Similar resources

Sparse recovery by thresholded non-negative least squares

Non-negative data are commonly encountered in numerous fields, making non-negative least squares regression (NNLS) a frequently used tool. At least relative to its simplicity, it often performs rather well in practice. Serious doubts about its usefulness arise for modern high-dimensional linear models. Even in this setting, unlike what first intuition may suggest, we show that for a broad class of ...

Full text
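The two-stage estimator the abstract alludes to, NNLS followed by hard thresholding, can be sketched in a few lines. The sketch below is a minimal illustration, not the paper's calibrated procedure: the non-negative design, noise level, and threshold tau are arbitrary assumptions, and scipy's active-set NNLS solver stands in for whichever solver one prefers.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
n, p, s = 100, 200, 5                     # p > n: high-dimensional regime, s-sparse truth

X = np.abs(rng.standard_normal((n, p)))   # non-negative design (illustrative choice)
X /= np.linalg.norm(X, axis=0)            # unit-norm columns
beta = np.zeros(p)
beta[:s] = 1.0                            # non-negative sparse target
y = X @ beta + 0.05 * rng.standard_normal(n)

beta_nnls, _ = nnls(X, y)                 # stage 1: non-negative least squares
tau = 0.25                                # stage 2: hard threshold (arbitrary level)
print("estimated support:", np.flatnonzero(beta_nnls > tau))
```

Whether the support is recovered exactly depends on the design satisfying the kind of self-regularizing property the paper studies; in practice tau would be calibrated to the noise level.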

Machine Learning and Non-Negative Compressive Sampling

The new emerging theory of compressive sampling demonstrates that by exploiting the structure of a signal, it is possible to sample a signal below the Nyquist rate, using random projections, and achieve perfect reconstruction. In this paper, we consider a special case of compressive sampling where the uncompressed signal is non-negative, and propose a number of sparse recovery algorithms, which ut...

Full text
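As a toy version of this non-negative compressive sampling setting, one can take random projections of a sparse non-negative signal and attempt recovery by plain NNLS. This is a hedged sketch, not one of the paper's proposed algorithms (the abstract is truncated before naming them), and exact recovery holds only in a suitable sparsity/measurement regime.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(2)
N, m, k = 256, 80, 8                     # signal length, measurements (m << N), sparsity

x = np.zeros(N)
x[rng.choice(N, size=k, replace=False)] = rng.uniform(1.0, 3.0, size=k)

A = rng.standard_normal((m, N)) / np.sqrt(m)  # random projections ("sampling below Nyquist")
y = A @ x                                     # m < N compressed, noiseless samples

x_hat, _ = nnls(A, y)                         # recovery using non-negativity alone
print("max reconstruction error:", np.abs(x_hat - x).max())
```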

Non-Negative Matrix Factorisation of Compressively Sampled Non-Negative Signals

The new emerging theory of Compressive Sampling has demonstrated that by exploiting the structure of a signal, it is possible to sample a signal below the Nyquist rate and achieve perfect reconstruction. In this short note, we employ Non-negative Matrix Factorisation in the context of Compressive Sampling and propose two NMF algorithms for signal recovery, one of which utilises Iteratively Rewei...

Full text
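The note's coupling of NMF with compressed measurements is behind the paywall, but the NMF building block itself is standard. Below is a minimal sketch of Lee-Seung multiplicative updates for the Frobenius-norm objective; the factorization rank and iteration count are arbitrary choices, and the paper's IRLS variant is not reproduced here.

```python
import numpy as np

def nmf(V, r, iters=200, eps=1e-9, seed=3):
    """Lee-Seung multiplicative updates minimizing ||V - W H||_F^2."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.uniform(size=(m, r))
    H = rng.uniform(size=(r, n))
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # multiplicative update keeps H >= 0
        W *= (V @ H.T) / (W @ (H @ H.T) + eps) # multiplicative update keeps W >= 0
    return W, H

V = np.abs(np.random.default_rng(4).standard_normal((30, 40)))  # non-negative data
W, H = nmf(V, r=5)
print("relative error:", np.linalg.norm(V - W @ H) / np.linalg.norm(V))
```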

A Discontinuous Neural Network for Non-Negative Sparse Approximation

This paper investigates a discontinuous neural network which is used as a model of the mammalian olfactory system and can more generally be applied to solve non-negative sparse approximation problems. By inherently limiting the system's integrators to having non-negative outputs, the system function becomes discontinuous since the integrators switch between being inactive and being active. It is...

Full text
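The network's defining feature, integrators that switch off when their outputs hit zero, can be mimicked in discrete time by rectified gradient dynamics on a non-negative sparse approximation objective. The sketch below is a generic stand-in under that interpretation, not the paper's specific circuit; the dimensions, penalty lam, and Euler step dt are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
m, n, lam, dt = 20, 50, 0.1, 0.01          # measurements, units, penalty, Euler step

A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[:3] = 1.0
y = A @ x_true                             # input to be sparsely approximated

x = np.zeros(n)                            # integrator states, clipped at zero
for _ in range(5000):
    grad = A.T @ (A @ x - y) + lam         # gradient of 0.5*||y - A x||^2 + lam*sum(x)
    x = np.maximum(x + dt * (-grad), 0.0)  # rectification: inactive units stay at 0

print("active units:", np.flatnonzero(x > 1e-3))
```

The max(·, 0) step is what makes the dynamics switch between inactive and active regimes, echoing the discontinuity the abstract describes.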

Thresholded Lasso for high dimensional variable selection and statistical estimation ∗

Given n noisy samples with p dimensions, where n ≪ p, we show that the multi-step thresholding procedure based on the Lasso, which we call the Thresholded Lasso, can accurately estimate a sparse vector β ∈ ℝ^p in a linear model Y = Xβ + ε, where X_{n×p} is a design matrix normalized to have column ℓ2 norm √n, and ε ∼ N(0, σ²I_n). We show that under the restricted eigenvalue (RE) condition (Bickel-Rito...

Full text
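A minimal sketch of this multi-step idea, Lasso followed by hard thresholding and a least-squares refit on the selected support, is given below. The tuning parameter is of the usual order sigma*sqrt(2 log p / n) and the threshold is an illustrative multiple of it, not the constants from the paper.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(6)
n, p, s, sigma = 100, 400, 5, 0.5

X = rng.standard_normal((n, p))
X *= np.sqrt(n) / np.linalg.norm(X, axis=0)     # columns scaled to l2 norm sqrt(n)
beta = np.zeros(p)
beta[:s] = 2.0
y = X @ beta + sigma * rng.standard_normal(n)

lam = sigma * np.sqrt(2 * np.log(p) / n)        # tuning parameter of the usual order
beta_lasso = Lasso(alpha=lam).fit(X, y).coef_   # step 1: Lasso

support = np.flatnonzero(np.abs(beta_lasso) > 4 * lam)     # step 2: hard threshold
coef, *_ = np.linalg.lstsq(X[:, support], y, rcond=None)   # step 3: OLS refit on support
beta_hat = np.zeros(p)
beta_hat[support] = coef
print("selected variables:", support)
```

The refit removes the Lasso's shrinkage bias on the selected coordinates, which is the point of the multi-step procedure.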



Publication date: 2011